
Futureproof: 9 Rules for Humans in the Age of Automation

Tags: #technology #ai #future of work #automation #society #ethics #jobs

Author: Kevin Roose

Overview

My book, ‘Futureproof: 9 Rules for Humans in the Age of Automation,’ examines the challenges and opportunities posed by the accelerating development of artificial intelligence and automation. I argue that while the future is not predetermined, many powerful players are not sufficiently focused on the potential downsides of AI. I aim to dispel the common misconception that this wave of technological change is inevitable and that we’re powerless to shape it. I address the anxieties of those concerned about losing their jobs to robots while advocating for a more nuanced perspective. By focusing on developing ‘uniquely human skills,’ we can not only survive but thrive in this new landscape.

My 9 rules offer practical advice and concrete steps individuals and companies can take to adapt and stay ahead of the curve. Throughout the book, I emphasize that AI is a tool, and its impact will ultimately depend on the choices made by those who design, deploy, and regulate it. By becoming more ‘surprising, social, and scarce’ in our work and lives, we can resist ‘machine drift,’ the tendency to become more predictable and less creative in the face of algorithms. We must demote our devices from masters to tools, leaving our own ‘handprints’ on our work rather than trying to out-hustle robots. We need to be mindful of the ways ‘boring bots,’ often overlooked, can cause significant harm. And most importantly, we must find ways to ‘arm the rebels’ by supporting the ethical technologists and advocates who are working to build a better future.

Ultimately, my book is a call to action. It’s time for us to engage actively in shaping the future of technology, ensuring it serves human needs rather than simply corporate profits. By reclaiming our agency, nurturing our uniquely human capabilities, and working together to build a more equitable future, we can make ourselves ‘futureproof.’

Book Outline

1. Introduction

AI and automation are rapidly changing the world, not just in the obvious ways like self-driving cars, but also in more subtle ways like the ‘invisible automation’ that slowly makes jobs obsolete. This transformation isn’t necessarily a positive for everyone, as powerful, profit-driven actors are often the ones shaping how AI is implemented. They tend to undersell the risks and downsides of automation for workers, and are not always prioritizing human well-being in their AI-driven pursuits.

Key concept: The ‘Boomer Remover,’ a software program designed to optimize factory production by replacing entire teams of human planners, highlights a key concern in the age of automation: the prioritization of profit over people.

2. Birth of a Suboptimist

Many people are optimistic about AI and automation and believe they will be a net positive for humanity. They point to historical examples where technology has created more jobs than it has destroyed. They argue that AI will make our jobs better by handling the ‘boring parts’ and freeing us up for more creative and meaningful work. They believe that humans and AI will collaborate, rather than compete, and that our limitless human needs will generate entirely new categories of jobs we can’t even imagine today. These are all compelling points, but there are reasons to be skeptical. It’s not guaranteed that this wave of automation will play out like previous ones, and we should be wary of the potential for harm, especially to vulnerable workers.

Key concept: The “productivity paradox”—the observation that US productivity growth has slowed over the past several decades despite rapid technological advancement—is often cited as evidence against the idea that automation is causing mass job losses. However, recent research suggests that this paradox may be due to the rise of ‘so-so technologies,’ which are good enough to replace human workers but not good enough to generate significant productivity gains that could create new jobs elsewhere in the economy.

3. The Myth of the Robot-Proof Job

The myth of the ‘robot-proof job’ is misguided. There’s a long history of experts failing to predict which jobs will and won’t be automated, and the belief that white-collar knowledge work is somehow immune from automation is dangerous. Focusing on broad occupational categories obscures the truth, which is that many seemingly ‘creative’ jobs are more routine and predictable than they first appear, and many seemingly ‘routine’ jobs actually require a complex mix of human skills that machines lack.

Key concept: There is no such thing as a truly “robot-proof job.” Rather than focusing on job titles, we need to think critically about the specific tasks that make up a job, and which of those tasks are likely to be automated. The tasks that machines excel at—those that are routine, predictable, and involve large data sets—are different from the tasks that humans excel at—those that require creativity, complex problem-solving, and social intelligence.

4. The Algorithmic Manager

We tend to think of robots as replacing workers who perform low-level grunt work, but increasingly, AI is being used to automate the tasks of middle management. This ‘algorithmic management,’ while not always a bad thing, does raise important questions about human autonomy and the potential for biases to be encoded into automated management systems.

Key concept: Software now acts as a kind of ‘algorithmic manager’ in a growing number of industries. This ranges from AI-powered coaching tools that provide real-time feedback to customer service representatives, to platforms like Uber and Lyft that use algorithms to manage pay, dispatching, and dispute resolution for drivers.

5. Beware of Boring Bots

The biggest threat to human workers may not come from the obvious, physically imposing robots we see in science fiction movies, but from the often-overlooked ‘boring bots’ that are already being used to automate essential tasks in our workplaces and government agencies. We should be wary of these bots, not because they are too powerful, but because, in some sense, they aren’t powerful enough—they automate jobs without generating significant productivity gains that could create new jobs elsewhere in the economy.

Key concept: Don’t worry about killer robots—worry about boring bots. ‘Bureaucratic bots,’ used by government agencies to make high-stakes decisions, and ‘back-office bots’ that automate repetitive office tasks, pose significant risks to workers and society as a whole. While they may seem mundane, these types of AI can cause widespread harm when implemented poorly or without proper oversight.

6. Don’t Be an Endpoint

The concept of humans as “endpoints”—the points of connection between machines that can’t yet communicate directly—highlights the vulnerability of workers whose jobs mainly consist of serving as bridges between pieces of software or incompatible technologies. As AI advances, these ‘endpoint’ jobs are likely to be automated away, making it imperative to think strategically about how to avoid getting stuck in these roles.

Key concept: “Humans quickly becoming expensive API endpoints.” - Chris Messina

7. Build Big Nets and Small Webs

When large-scale technological change disrupts industries and displaces workers, we need both “big nets” and “small webs” to cushion the blow. ‘Big nets’ are sweeping policy changes and social safety net programs, while ‘small webs’ are the interpersonal relationships, local community organizations, and mutual aid networks that help people during times of need.

Key concept: The rapid recovery of Waterloo, Canada, after the decline of BlackBerry, its largest employer, demonstrates the importance of both ‘big nets’ (social safety net programs like universal healthcare and generous unemployment benefits) and ‘small webs’ (the informal, local networks of friends, neighbors, and community organizations that can support us during times of hardship).

8. Learn Machine-Age Humanities

In a world of increasingly sophisticated attention-grabbing technologies, we need to develop strong ‘machine-age humanities’ skills to thrive. ‘Attention guarding’—the ability to protect our focus from constant distractions—is essential, as is ‘room reading’—the art of understanding the social dynamics and power structures in a given situation.

Key concept: Focus is no longer enough—we need to practice ‘attention guarding,’ proactively protecting our focus from the constant barrage of distractions trying to capture it. Activities like meditation, breathing exercises, nature walks, and spending time away from screens can help us cultivate this crucial skill.

9. Arm the Rebels

In the face of a rapidly changing world, we have a choice to make. We can either withdraw from technology and try to preserve the status quo, like Henry David Thoreau, or we can actively engage with it and work to bend it toward a more just and equitable future, like Sarah Bagley. The strategy I advocate for is ‘arming the rebels’—supporting the people working on the front lines of responsible technology and social justice, and giving them the tools, data, and emotional support they need to fight for a better future.

Key concept: Rather than simply opposing technology, we should focus on ‘arming the rebels’—supporting ethical technologists, activists, and researchers who are working to ensure that AI and automation are used for good, and that their benefits are shared equitably.

Essential Questions

1. Are there any ‘robot-proof’ jobs, and how can we prepare for a future where AI is increasingly integrated into the workplace?

The book dismantles the idea of inherently ‘robot-proof’ jobs, arguing that the focus should shift from job titles to the tasks they comprise. By analyzing these tasks, we can determine their susceptibility to automation and identify opportunities for humans to add unique value. This involves cultivating skills that are ‘surprising, social, and scarce,’ emphasizing creativity, social intelligence, and specialized expertise that machines struggle to replicate.

2. What is ‘machine drift,’ and how can we mitigate the negative effects of excessive reliance on technology in our daily lives?

The book argues that excessive reliance on technology can lead to ‘machine drift’—a subtle but pervasive process where we become more predictable, less creative, and increasingly influenced by algorithms. To resist this drift, we must consciously create friction in our lives, demote our devices, and reclaim control over our attention, choices, and experiences.

3. How can we ensure that AI is implemented responsibly and ethically, minimizing potential harms and maximizing benefits for individuals and society?

While automation can bring efficiency and convenience, ‘Futureproof’ warns against overautomating and blindly entrusting AI with critical decisions. It calls for a cautious approach, emphasizing the need for robust oversight, rigorous testing, and ethical considerations when implementing AI systems, especially in high-stakes domains like healthcare, finance, and criminal justice.

4. What role can individuals and organizations play in shaping the future of AI and automation, ensuring they contribute to a more just and equitable society?

The book champions the idea of ‘arming the rebels’—supporting the ethical technologists, activists, and researchers working to make AI a force for good. This involves promoting transparency, accountability, and social justice in the development and deployment of AI, ensuring it empowers individuals and communities rather than serving solely corporate interests.

5. What are ‘machine-age humanities’ skills, and why are they crucial for thriving in an AI-driven future?

The book asserts that individuals can not only adapt to the changing landscape but also thrive by cultivating “machine-age humanities” skills. These skills, such as attention guarding, room reading, resting, digital discernment, analog ethics, and consequentialism, focus on developing uniquely human capabilities that are difficult for machines to replicate and are essential for navigating an increasingly complex, AI-driven world.

Key Takeaways

1. Leave Handprints on Your Work

In a world where many routine tasks can be automated, emphasizing the uniquely human aspects of our work becomes crucial. Highlighting the creativity, thoughtfulness, and effort behind our work, even in seemingly mundane tasks, distinguishes us from machines and adds value.

Practical Application:

A graphic designer could walk clients through their design process, explaining the thought process behind each decision and highlighting the expertise involved. A programmer could regularly practice explaining complex technical concepts to non-technical colleagues.

2. Resist the Pull of Frictionless Design and Algorithm-Driven Choices

Surrendering our decision-making to algorithms and frictionless systems can lead us to become more passive and predictable. Introducing intentional friction into our lives, such as setting aside time for ‘human hour’ activities, helps us reclaim our autonomy and break free from the ‘tyranny of convenience.’

Practical Application:

Instead of mindlessly clicking on suggested YouTube videos, take a break and engage in a hobby that requires focus and creativity. Challenge yourself to read a physical book instead of scrolling through social media.

3. Treat AI Like a Chimp Army—Smart but Unpredictable

AI is still far from perfect and can have unintended consequences when entrusted with too much authority or deployed without proper oversight. A cautious, ‘chimp army’ approach is necessary, acknowledging AI’s limitations and prioritizing human oversight, especially in high-stakes situations.

Practical Application:

Before deploying a new AI-powered customer service chatbot, conduct a thorough risk assessment, considering potential errors, biases, and edge cases. Include representatives from various departments, including customer service agents, in the design and testing process.

4. Support Ethical Technologists and ‘Arm the Rebels’

Ethical technologists and advocates working inside and outside tech companies are crucial for ensuring that AI is used responsibly and for the benefit of humanity. By supporting these ‘rebels’ through funding, advocacy, and raising awareness, we can help shape a more equitable and humane future for AI.

Practical Application:

Support organizations like the ACLU or the Electronic Frontier Foundation that are advocating for ethical AI development and fighting against algorithmic bias in areas like criminal justice and facial recognition.

Memorable Quotes

Birth of a Suboptimist (p. 21)

“We’re trying to pull the robot out of people, and let people achieve greater things.”

The Myth of the Robot-Proof Job (p. 37)

We humans are neural nets. What we can do, machines can do.

How Machines Really Replace Us (p. 51)

It’s almost certain that some of the technologies in our lives today will end up costing humans their jobs, just as these tools did. An easy lesson to draw from history is that machines disrupt our lives in ways we don’t see coming.

Resist Machine Drift (p. 76)

The main business of humanity is to do a good job of being human beings, not to serve as appendages to machines, institutions, and systems.

Learn Machine-Age Humanities (p. 107)

“We’re training people to do machine things. We shouldn’t be doing that. We should be training people in uniquely human capabilities.”

Comparative Analysis

While ‘Futureproof’ shares common ground with books like ‘The Second Machine Age’ (Brynjolfsson & McAfee) and ‘Rise of the Robots’ (Martin Ford) regarding the transformative potential of AI and automation, it distinguishes itself by emphasizing the human element. Unlike those works, which primarily analyze economic impacts and technological trends, ‘Futureproof’ focuses on how individuals can adapt and thrive in an AI-driven world. It delves deeper into the psychological and social consequences of automation, such as ‘machine drift’ and ‘idleness aversion.’ Additionally, it offers concrete strategies for resisting these forces and maintaining human agency, a perspective often absent in other discussions. It also aligns with ‘Humans Are Underrated’ (Geoff Colvin) in emphasizing the enduring value of uniquely human skills. However, it goes further by proposing specific strategies for cultivating these skills and integrating them into professional life. By weaving together history, psychology, and real-world examples, ‘Futureproof’ offers a unique and actionable roadmap for navigating the challenges and opportunities of the AI age.

Reflection

Roose’s ‘Futureproof’ provides a timely and thought-provoking exploration of the implications of AI and automation. His emphasis on human agency and the need to cultivate uniquely human skills is a welcome counterpoint to the often deterministic narratives surrounding technological advancement. However, his suboptimistic view might not fully account for the potential of AI to create new industries and unforeseen job opportunities. Additionally, his focus on individual adaptation could overshadow the need for systemic changes, such as robust social safety nets and ethical regulations, to ensure a more equitable distribution of AI’s benefits. Despite this, ‘Futureproof’ is a valuable contribution to the ongoing conversation about our technological future. Its focus on individual agency, coupled with practical advice and a nuanced understanding of both the risks and opportunities of AI, makes it an essential read for anyone navigating the complexities of the 21st-century workforce.

Flashcards

What is ‘machine drift’?

The tendency to become more predictable, less creative, and increasingly influenced by algorithms due to excessive reliance on technology.

What is ‘choice architecture’?

Subtle design elements in products or services that nudge users towards specific choices, potentially influencing their preferences and behaviors.

What is ‘phubbing’?

The act of ignoring someone in favor of using your phone, signifying a flight from open-ended, spontaneous conversation.

What is a ‘human endpoint’?

People whose jobs primarily involve connecting different pieces of software or bridging incompatible technologies, making them highly vulnerable to automation.

What are ‘boring bots’?

Simple, rule-based automation programs that lack AI’s adaptive learning capabilities but can still automate tasks, potentially displacing workers without significant productivity gains.

What does it mean to ‘leave handprints’?

Prioritizing the obvious human effort behind a product or service, emphasizing craftsmanship, personalization, and social connection, to differentiate from machine-made alternatives.

What is ‘attention guarding’?

Proactively protecting our attention from distractions, a crucial skill for navigating a world filled with attention-grabbing technologies.

What is ‘room reading’?

Understanding the social dynamics, nonverbal cues, and power structures in a given situation, a skill honed by those who have had to code-switch or navigate complex social environments.

What does it mean to ‘arm the rebels’?

Supporting ethical technologists, activists, and researchers working to ensure AI is used for social good and its benefits are distributed equitably.